
    Sieving parton distribution function moments via the moment problem

    Reconstructing a parton distribution function (PDF) from its Mellin moments is an instance of a classical mathematical problem, the moment problem, which has been overlooked for years in the contemporary hadron-physics community. We propose a strategy to sieve the moments by leveraging PDF properties such as continuity, unimodality, and symmetry. Through an error-inclusive sifting process, we refine three sets of lattice QCD PDF moments. This refinement significantly reduces the errors, particularly for higher-order moments, and simultaneously locates the peak of the PDF. As our method is universally applicable to PDF moments obtained by any methodology, we strongly advocate its integration into all PDF moment calculations. Comment: 6 pages, 2 figures
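    The moment problem invoked above has a concrete, checkable criterion: for a distribution supported on [0, 1] (as for a PDF in the momentum fraction x), a sequence is a valid moment sequence if and only if it is completely monotone (the Hausdorff moment problem). A minimal, hypothetical consistency check along these lines (not the authors' sieving code) could look like:

    ```python
    def is_hausdorff_moment_sequence(m, tol=1e-12):
        # A sequence m_0, m_1, ... is the moment sequence of a
        # measure on [0, 1] iff it is completely monotone:
        # (-1)^k (Delta^k m)_n >= 0 for all k, n, where
        # (Delta m)_n = m_{n+1} - m_n  (Hausdorff's criterion).
        seq = list(m)
        sign = 1
        while seq:
            if any(sign * x < -tol for x in seq):
                return False
            seq = [b - a for a, b in zip(seq, seq[1:])]  # forward differences
            sign = -sign
        return True

    # Moments of the uniform distribution on [0, 1]: m_n = 1/(n+1)
    uniform_moments = [1 / (n + 1) for n in range(6)]
    ```

    A sequence such as [1, 0.9, 0.5] fails the check, since it would imply a negative variance (0.5 − 0.9² < 0); an error-inclusive sieve would discard such candidate moment sets.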

    Reconstructing parton distribution function based on maximum entropy method

    A new method, based on the maximum entropy principle, for reconstructing the parton distribution function (PDF) from its moments is proposed. Unlike traditional methods, it does not need to introduce any artificial assumptions. For moments with errors, we introduce Gaussian functions to soften the moment constraints. A series of tests is conducted to comprehensively evaluate the validity and reconstruction efficiency of the new method. These tests indicate that our method is reasonable and achieves high-quality reconstruction with at least the first six moments as input. Finally, we take a set of lattice QCD moment results as input and obtain reasonable reconstructions. Comment: 6 pages, 8 figures
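    As an illustration of the principle (a sketch, not the authors' implementation), a maximum-entropy density on [0, 1] matching a few exact power moments can be obtained with the standard exponential-family ansatz f(x) = exp(Σ_k λ_k x^k), fitting the multipliers λ_k by gradient descent on the convex dual; the grid size, step size, and iteration count below are illustrative choices:

    ```python
    import math

    def maxent_density(moments, grid_n=200, iters=2000, lr=0.5):
        """Maximum-entropy density on [0, 1] matching power moments
        m_k = int_0^1 x^k f(x) dx, k = 0..K-1 (illustrative sketch)."""
        xs = [i / (grid_n - 1) for i in range(grid_n)]
        w = 1.0 / (grid_n - 1)          # crude uniform quadrature weight
        K = len(moments)
        lam = [0.0] * K                 # f(x) = exp(sum_k lam_k x^k)
        for _ in range(iters):
            f = [math.exp(sum(l * x**k for k, l in enumerate(lam))) for x in xs]
            # moments of the current trial density
            mu = [sum(fx * x**k for fx, x in zip(f, xs)) * w for k in range(K)]
            # gradient step on the convex dual: raise lam_k if the k-th
            # moment is too small, lower it if too large
            for k in range(K):
                lam[k] += lr * (moments[k] - mu[k])
        return xs, [math.exp(sum(l * x**k for k, l in enumerate(lam))) for x in xs]
    ```

    Feeding in the uniform-distribution moments [1, 1/2, 1/3] recovers an essentially flat density, as expected. The Gaussian softening of noisy moment constraints described in the abstract would replace the exact moment-matching gradient with a penalised one; that refinement is omitted here.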

    Pion scalar, vector and tensor form factors from a contact interaction

    The pion scalar, vector and tensor form factors are calculated within a symmetry-preserving contact-interaction (CI) model of quantum chromodynamics (QCD), formulated within a Dyson-Schwinger and Bethe-Salpeter equations approach. In addition to the traditional rainbow-ladder truncation, a modified interaction kernel for the Bethe-Salpeter equation is adopted. The implemented kernel preserves the vector and axial-vector Ward-Takahashi identities, while also providing additional freedom. Consequently, new tensor structures are generated in the corresponding interaction vertices, shifting the location of the mass poles appearing in the quark-photon and quark-tensor vertices and yielding a notable improvement in the final results. Despite the simplicity of the CI, the computed form factors and radii are compatible with recent lattice QCD simulations. Comment: 11 pages, 8 figures

    Multi-Level Factorisation Net for Person Re-Identification

    Key to effective person re-identification (Re-ID) is modelling discriminative and view-invariant factors of person appearance at both high and low semantic levels. Recently developed deep Re-ID models either learn a holistic, single-semantic-level feature representation or require laborious human annotation of these factors as attributes. We propose Multi-Level Factorisation Net (MLFN), a novel network architecture that factorises the visual appearance of a person into latent discriminative factors at multiple semantic levels without manual annotation. MLFN is composed of multiple stacked blocks. Each block contains multiple factor modules to model latent factors at a specific level, and factor selection modules that dynamically select the factor modules to interpret the content of each input image. The outputs of the factor selection modules also provide a compact latent factor descriptor that is complementary to the conventional deeply learned features. MLFN achieves state-of-the-art results on three Re-ID datasets, as well as compelling results on the general object categorisation dataset CIFAR-100. Comment: To appear at CVPR201
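    The block structure described above can be sketched in miniature (all names and shapes here are assumptions for illustration, not the paper's code): each block holds several factor modules, and a selection gate computes input-dependent softmax weights that mix the module outputs.

    ```python
    import math

    def mlfn_block(x, factor_modules, gate_weights):
        """One MLFN-style block (illustrative): mix the outputs of
        several factor modules with input-dependent softmax gates."""
        # gate scores: one linear projection of the input per module
        scores = [sum(wi * xi for wi, xi in zip(w, x)) for w in gate_weights]
        mx = max(scores)                           # numerically stable softmax
        exps = [math.exp(s - mx) for s in scores]
        total = sum(exps)
        gates = [e / total for e in exps]
        outputs = [m(x) for m in factor_modules]   # each module maps x -> vector
        mixed = [sum(g * o[j] for g, o in zip(gates, outputs))
                 for j in range(len(x))]
        return mixed, gates
    ```

    The gate vector (one weight per factor module) is what the paper concatenates across blocks into the compact latent factor descriptor; in the real architecture the modules are convolutional sub-networks rather than simple vector maps.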

    L1 Graph Based Sparse Model for Label De-noising


    Scalable and Effective Deep CCA via Soft Decorrelation

    Recently, the widely used multi-view learning model Canonical Correlation Analysis (CCA) has been generalised to the non-linear setting via deep neural networks. Existing deep CCA models typically first decorrelate the feature dimensions of each view before the different views are maximally correlated in a common latent space. This feature decorrelation is achieved by enforcing an exact decorrelation constraint; these models are thus computationally expensive due to the matrix-inversion or SVD operations required for exact decorrelation at each training iteration. Furthermore, the decorrelation step is often separated from the gradient-descent-based optimisation, resulting in sub-optimal solutions. We propose a novel deep CCA model, Soft CCA, to overcome these problems. Specifically, exact decorrelation is replaced by soft decorrelation via a mini-batch-based Stochastic Decorrelation Loss (SDL), optimised jointly with the other training objectives. Extensive experiments show that the proposed Soft CCA is more effective and efficient than existing deep CCA models. In addition, our SDL can be applied to other deep models beyond multi-view learning, and obtains superior performance compared to existing decorrelation losses. Comment: To appear at CVPR201
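    A minimal sketch of such a soft decorrelation penalty, penalising the off-diagonal entries of the mini-batch feature covariance instead of enforcing exact whitening (the paper's SDL additionally maintains a running covariance estimate across mini-batches, omitted here):

    ```python
    def stochastic_decorrelation_loss(batch):
        """batch: list of feature vectors, one per sample.
        Returns the sum of absolute off-diagonal covariance entries,
        a differentiable surrogate for exact feature decorrelation
        that needs no matrix inversion or SVD."""
        n = len(batch)
        d = len(batch[0])
        means = [sum(x[j] for x in batch) / n for j in range(d)]
        centred = [[x[j] - means[j] for j in range(d)] for x in batch]
        loss = 0.0
        for i in range(d):
            for j in range(d):
                if i == j:
                    continue  # keep per-dimension variance unconstrained
                cov_ij = sum(c[i] * c[j] for c in centred) / n
                loss += abs(cov_ij)
        return loss
    ```

    Perfectly correlated feature dimensions incur a positive penalty, while already-decorrelated features incur none; in a deep model this term would simply be added to the main training loss and minimised by the same stochastic gradient descent.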

    Disjoint Label Space Transfer Learning with Common Factorised Space

    In this paper, a unified approach to transfer learning is presented that addresses several source- and target-domain label-space and annotation assumptions with a single model. It is particularly effective in the challenging case where the source and target label spaces are disjoint, and it outperforms alternatives in both unsupervised and semi-supervised settings. The key ingredient is a common representation, termed the Common Factorised Space, which is shared between the source and target domains and trained with an unsupervised factorisation loss and a graph-based loss. Through a wide range of experiments, we demonstrate the flexibility, relevance, and efficacy of our method, both in challenging cases with disjoint label spaces and in more conventional settings such as unsupervised domain adaptation, where the source and target domains share the same label set. Comment: AAAI-1